From YouTube: Fluent Talks | 001 | Calyptia Enterprise + OpenSearch
Description
Please join us for Fluent Talks! Our weekly webinar and office hours, on Fridays at 2PM Central. Streaming live on YouTube.
This week we'll 1) Provide an overview of Calyptia Enterprise, 2) Discuss the upcoming Fluent Bit 1.9 release, 3) Talk about our partnership with the OpenSearch team, and 4) Detail our recent Fluent Bit release for Red Hat OpenShift
#opensearch
C: Cool, well, hey everyone, thanks for joining us! This is Fluent Talks. We stream live on Fridays, have everyone join, and go over a couple of topics, but if you have questions and thoughts, we're happy to answer.
C: This is really just a time for folks to get together, showcase some of the cool stuff that's going on within the Fluent ecosystem and the cool stuff going on within Calyptia, and really answer questions, ideas, or topics that the community might have. So for today we thought we'd load up with a couple of cool topics here for table discussion. I'll do a quick tour of what Calyptia does, what we have, and what you can try out today, then go over the new Fluent Bit 1.9 release.
C: It's upcoming and we're really excited for it; it has a bunch of cool features, plus some new stuff around the Fluent Bit website, which is updated at this point, so glad to get that out there. Then the Fluent partnership with OpenSearch, the stuff that we've done from the Calyptia side and what that entails, and then Red Hat OpenShift and some of the certified image stuff that we have going on.
C: So with that, let me start off with the Calyptia quick tour: who we are and what we do. For us, really, we're a company that was created out of the Fluentd and Fluent Bit open source projects. We saw those projects getting a lot of traction, they're part of the Cloud Native Computing Foundation, and we would continually get questions like: hey, how do we run this at high scale? How do we do XYZ?
C: How do we integrate it with all these data sources or data backends that might be less conventional than what you might find in open source? And we said: oh, we can craft this company and really help support the users that are leveraging this.
C: We think of it as this journey that you have to take as you want to go and bring insights from all of this machine data or your container data, whatever it may be, and first mile is really saying: in that journey, we can help you own and control the way that you collect that data. So if you need to route that to specific backends, or you need to modify it in stream, how can we really help users?
C: Now, what we do offer is Calyptia Cloud, which is a free service. You can sign up via GitHub or Google. It has a lot of dev tools that make it useful when you want to validate and make sure things are running properly, or you need to do some regular expressions. So just as a quick example, we have some examples here where you want, maybe, to add a hostname to a log that's streaming, so we can visualize that configuration.
C: You can paste your configuration here and we can visualize it. It'll go ahead and check whether there are particular pieces that might not be running correctly or not defined correctly. So, for example, if we look at, say, this one, which I believe has some warnings in it, we can say: hey, you need to set a parser here in syslog.
C: That's not going to run correctly. Or in the Splunk output plugin, message key is actually not an option, so this configuration wouldn't run correctly. And this can really be helpful as you think about deploying something at scale or across your Kubernetes environment: instead of having to deploy and check, you can just run it quickly on the web, copy-paste, and see if everything goes well.
C: You might want to make that data into something that's more parsable, more actionable, and maybe more useful when it hits the backend, like a Splunk or Elasticsearch or Datadog. And some examples are very popular, like NGINX logs, where we could say: hey, let's take this NGINX log, let's throw it in and see how we get some fields on top of it. So here you can see what it will look like, so again you don't have to deploy and check.
C: You can really just run it and see what this will look like, and what we could then do is say: I don't actually care about any of these 200 codes, I only care about 404s or 500s, and we can grep based off of these specific message keys, which is super useful. So this is another piece that we have within Calyptia Cloud as well: you can connect up your Fluentd and Fluent Bit instances, so you can view things like your config history.
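The grep step described above could look roughly like this hypothetical classic-mode filter, assuming the NGINX parser has already split out a `code` field:

```ini
# Keep only records whose HTTP status code is 404 or 500;
# everything else (e.g. the 200s) is dropped.
[FILTER]
    Name   grep
    Match  nginx.*
    Regex  code ^(404|500)$
```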
C: So if I go into my monitoring overview, I can see how many events per second I'm sending, and I can look at things like what my average bytes per second are. Then, if I scroll down, I have the instances that are reporting and the ones that are not, and if I go check some of these instances, I can see, say, the configuration, what the history might look like on particular days, and then what my plugins are. As for optimizations here, I'm not really running anything, just checking the health of this.
C: That's a bit about Calyptia Cloud and what we offer: things you can start using immediately, value on top of Fluent Bit and Fluentd. And yeah, let's go back to the agenda. Any comments, Pat, Eduardo, or other folks that might want to chime in?
B: I just want to add that at Calyptia, besides what we have seen now, ways to configure Fluent Bit and validate configuration, we're creating and shipping soon a bunch of developer tools for the Fluent ecosystem. As a bit of a sneak peek, we will have actions and linters for configurations, so we're trying to make sure that when you deploy your stuff on production, you have all the automation and APIs to get started. Actually, one of the common frustrations is: okay, I have this configuration.
B: I don't know how it will work; I'm going to deploy it to production, things go wrong, you have to roll back, and it's always a common mess, right? We have all been there, even us. So right now, as a company, we're trying to fix this. We're going to release a bunch of free tools for you, so just feel free to sign up to Calyptia and play with the tools. But also we're going to have other kinds of offerings, kind of a more productized enterprise offering for Kubernetes. You will hear more about that in a few weeks.
D: Yeah, so a lot of the stuff I think that's coming through the pipeline is probably, well, I wouldn't say entirely from my complaints, but maybe based on them and directed by them. In my previous role I sort of was deploying Fluent Bit for customers, really as part of an operator, and I was very much on the dev side rather than the operations side, so I was trying to find — yeah, we're trying to improve that kind of experience.
D: Quite a bit. You know, how can we test stuff? How can we lint it, as you touched on, all these kinds of things, so you can shift stuff left before you get to production and do that kind of stuff. So yeah, there's some cool stuff coming. I'm also, this week, starting to learn TypeScript as well to do some of that. So yeah, it's all a bit exciting and terrifying, but yeah, awesome.
B: There's a lot of lack of tooling in the ecosystem, and I think one of the objectives of Calyptia is to fill all these gaps, right? There are many observability tools, but I think what we're missing is the observability ecosystem for developers, and that's where Calyptia, you know, we need to build all these bridges for you. And yeah.
D: And probably things like automation as well; that's a big one for me. For the last five, ten years it's been driving towards doing automation: making sure you can prove it, test it, and demonstrate it reliably, those kinds of things as well. So yeah, let's get into that.
B: Okay, so I don't know if we have questions. This is our first quick tour, right, our fresh Fluent Talk. Actually, right now we're just people from Calyptia, but we would like to invite other people from the community also to present different technical topics associated with observability.
B: This is not just self-promotion, right? The idea here is to create community, and most of you come from the Fluent ecosystem, so happy to have you here. If you have any questions, let us know; we're monitoring the Slack and the YouTube channel, but yeah, we will take it from there. I don't know if you have any questions around, Tim.
A: Nothing yet, but I did put up the link for Calyptia Cloud, so people can register there, try out the product for themselves, and understand the value, and, as Eduardo said, we'd love to get your feedback. I think the product's very powerful already; we've got a lot of people using the product in production, but there's a lot to improve, so we definitely want to get your input. Yeah, absolutely.
C: Okay, well, let's switch into the second one, which I think is really exciting: Fluent Bit 1.9. What's kind of in it? Do you want to talk about that one?
B: So I'm going to my Fluent Bit repository here, and actually this is running a nightly, right, so 1.9. One of the interesting things, well, one of the major things now, is that we're extending our configuration format. Besides supporting the Fluent Bit format, which we call classic configuration mode, we are now optionally starting to support YAML, so we have a kind of parity between one configuration format and the other. Now, for example, I'm going to show the unit test cases for this.
B: For this stuff, for example, the new configuration format will look like this; let me bring up the YAML, there you go. So the new YAML schema that we have aims to mimic what we have in the classic mode, but, for example, with respect to the kind of local environment variables, which can be resolved locally: this used to be called set in the classic mode, or maybe we can try to do this.
B: There we go. So, as you can see, in the classic format we have what we call meta-commands, which is @SET, in order to set a variable: a equals one, b equals two. In the YAML format now we come up with the same functionality, but it's called env, so whatever we put here as a key-value pair can be expanded later in the configuration.
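A minimal sketch of the equivalence being described; the variable names and values are illustrative:

```yaml
# Classic mode used meta-commands:
#   @SET a=1
#   @SET b=2
# In YAML mode the same thing lives under an env section:
env:
  a: 1
  b: 2

# ${a} and ${b} can then be expanded later in the configuration.
```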
B: Also, we support in the classic mode what we call includes, right, the ability to include files. Nowadays nobody has just one big config file; we used to have that years before, but now most people just want to have different files, one for inputs, a different file with all the filters, all the outputs, and just do an include of them, right? So from a maintenance and deployment perspective it makes things easier. Now in YAML we also support the same functionality through the includes key, and this is like an array, right.
B: You just put one or more lines with the new files that you want to include. Now, you might also have different kinds of plugins, right; you might hear about inputs, filters, outputs, and so on. Now we have another category, which is called custom plugins. Custom plugins are not part of the pipeline; usually they do something when Fluent Bit is starting, maybe preparing the environment, maybe creating something, right.
B: The only custom plugin that we have right now is the Calyptia plugin, which allows you to connect your running Fluent Bit to our cloud, so you can get all the metrics from the running agent, as I was showing before. You just put your API key and you will be able to see on the dashboard how healthy your agent is, right? You just need your API key and you're ready to go. But that kind of plugin is not part of a pipeline, right? It's something that...
B: Now, for plugins that are part of the pipeline, if we compare the difference with the classic mode, for example, there everything was at the top level. Now we have the pipeline defined under the pipeline section, and this is basically because we found that people used to get confused in production when reading configuration.
B: Sometimes you have a couple of inputs that have a specific tag, and specific filters with a specific match, and all of them will match together, right? And maybe you have another kind of input that will go to a different filter and a different output, and in the classic mode that becomes a very big file, right. So here in the YAML file you will be able to specify different pipeline sections, so visually it will be more appealing and easier to process.
B: One of the cool things that I didn't mention about includes here is that we are including a file called service.yaml. If I cat that file — oh, it's here, right — I will see that this file creates a new environment variable that is called observability, with a value of calyptia, so it replaces a plugin name here. And also this file includes another one, which is called test-nested.yaml, a path relative to the last one, and this one defines the service section.
B: This is just an example, because it's a unit test; we want to make sure that includes are working and everything is set. And also this one, this is where we're consuming the variable that was set in the first file, and then we include another pipeline, which is called dummy-pipeline.
B: Let me clean up the screen, right. This is a dummy plugin; we group the records and send them out to standard output. Okay, so 1.9 is coming with this feature already. And one of the other features that we're pretty excited about — I don't know if we can make the disclaimer right now — is the big one: natively, for 1.9, we're going to support full integration with OpenTelemetry, right. And you may ask why.
B: Whatever the company has in the infrastructure, we can go there and interact with different protocols, different sources of data, different kinds of destination payloads. So we become this kind of proxy that can help to alleviate the problem that sometimes you just get stuck with one single stack, right? If you're on Splunk and you have Splunk forwarders there, you can just put Fluentd or Fluent Bit in front and continue having the same experience for your data analytics with Splunk, but have more control over your data.
B: What will be our position? We see that OpenTelemetry is pretty mature on traces. Metrics is kind of new, because it just got stable, and logs is still a bit, a few years, behind because of the protocol. Listen, well, we are not involved in the project development, but we see that there's still some stuff going on there. But as a project and as a company it's like: okay, how can we embrace OpenTelemetry and help users to succeed, right? Our experience is on the agent side.
B: Our experience is processing and sending data reliably, at high performance, with a very low cost in terms of CPU and memory, right. So we decided: okay, we have so many users, a huge user base of Fluent Bit, it being deployed up to two million times a day. So what is the value that we can bring here to the CNCF ecosystem?
B: Let's help OpenTelemetry to succeed in that area, from our point of view, for our users, right? Because at the end of the day it's not about replacing technology; it's about the whole, improving and solving problems that you have on a daily basis. And it's not a solution to say, from one day to the next, I'm going to replace all my agents, I'm going to replace everything. No, that might take one, two years; we have seen that many times with the companies that we work with. And so yeah, OpenTelemetry is coming in our plan.
B: Now, we are shipping this week — that's coming mid-release. Can we do a demo next week for the next Fluent Talk? That would be great, and, well, I'm guessing that everything will be ready for the demo, right; we just did a demo this morning, so that's why I mentioned this, and the goal will be that next week there's a demo. And now, for example, what we can do with OpenTelemetry is take the native Fluent Bit metrics — because we do logs and metrics — and ship these native metrics to OpenTelemetry, and, well...
C: Metrics, oh, that plugin — there's an output plugin to Apache SkyWalking. Oh yeah, that was contributed also; it was a great contribution from, I believe, the folks at Tetrate. And, let's see, there's an OpenSearch plugin coming as well, yeah. That's...
D: Yeah, the other thing on that YAML format, I think, is that we're going to support URLs rather than just local files as well. Is that coming in 1.9 as well? So we can do GitHub kinds of stuff: rather than say, load this local YAML file, you can load a YAML file from your repository or whatever.
D: Yeah, yeah, that's an improvement, but that would be cool as well for, like, enterprise customers managing your stuff. The Kafka stuff as well is pretty straightforward. For people watching, there's quite a good, simple Compose stack that runs it all up. I mean, I've never really used Kafka before, and I got it working in the space of a couple of minutes just by running the Compose stuff, so it's a pretty good way to test it.
B: So that does not mean that when 1.9 is out we're going to deprecate td-agent-bit and replace it overnight with Fluent Bit; it will be a process, right. So for some time we'll have td-agent-bit and we'll have fluent-bit packages; we're going to encourage all our users to migrate to the fluent-bit packages and start the deprecation of td-agent-bit.
D: Yeah, I can — thanks for no notice at all! So let me just find it, but yeah, let me share my screen. This is probably...
D: 1.8 at the moment still has a few kind of manual workflows and stuff like that, where Eduardo has to do specific stuff on specific build machines just to get a release out, which is not good for everyone involved, because it's a bit more work and stuff like that. So what I've been doing is setting up this kind of staging workflow, the idea being: we take what's on master, we build it regularly, we put it to staging, we can run a load of tests, yeah.
D: We need to test different things. We need to test all the different packages we build, so all the different targets, and then we need to test all the different containers we build as well, either in Kubernetes or not in Kubernetes, and ideally with the Helm charts as well, just to make sure everything's working as a user would do it.
D: That's some of the stuff I've been putting together. So this workflow here is all in the Fluent Bit repo now; it's called deploy-to-staging, and that essentially does the build. It's broken into build and test, because it takes quite a long time to do each bit, and also the directed graph gets massive if you try and squeeze it all onto one, but one triggers the next. So it's, yeah, a fully automated pipeline.
D: So here we build all the different images, and you can see there are like 19 targets down here for all the different packages. We've also added stuff like cosign signing of the container images and some vulnerability scanning, those kinds of bits and pieces, to improve supply chain security. So there's quite a few bits of CI stuff and things behind the scenes that you can't really see, and that we're trying to improve stuff with. On the testing side, right at the bottom...
D: ...there's a big testing section. I don't know if you guys can see; it's quite small, so let's see. This is all the different tests we do just for testing a release. So this is not every single unit test, edge case, and integration test, because by the point it gets merged to master those should have been run. This is about then doing essentially a good sanity check...
D: ...a good confidence check that all of the different targets work. We're not testing all the different edge cases in every single target, because that would just consume, you know, more resources than probably the universe has got. So it's about testing, making sure that the packages we're releasing work and are built correctly and signed, and all these kinds of things as well. So here's a few of them; this is what the testing looks like at this point. So we do some cheap checks...
D: ...first, and then we move to more expensive checks. Take the container stuff: there were quite a few issues with the different architectures, because they're built slightly differently, and the dependencies were getting missed and stuff like that. So we have some simple sanity checks that say, does the container run? And that picks up a lot of the "you've missed a dependency" cases. And then we move on to: does it run and give us a web server...
D: ...we can talk to for, you know, a minute or so, to make sure it stays up. And then once we pass that, we move into: let's deploy with Helm, make sure it runs in a Kubernetes environment, with no weird permission problems or ports or any stuff like that. And that's kind of how we step through. And also, you probably can't see it, but I've added some sort of developer-preview multi-arch containers in, and these are, at the moment, for 1.8.
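The cheap-then-expensive staging just described could be sketched, very roughly, as a GitHub Actions fragment; the job names, image tag, and chart reference here are hypothetical, not the actual workflow:

```yaml
jobs:
  container-smoke-test:
    runs-on: ubuntu-latest
    steps:
      - name: Does the container run and stay up?
        run: |
          docker run -d --name fb staging/fluent-bit:nightly
          sleep 60
          docker inspect -f '{{.State.Running}}' fb | grep -q true
  helm-smoke-test:
    needs: container-smoke-test   # only pay for this after the cheap check passes
    runs-on: ubuntu-latest
    steps:
      - name: Deploy with Helm into a throwaway cluster
        run: helm install fb fluent/fluent-bit --wait --timeout 5m
```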
D: The production images are slightly different depending on the architecture, so you've got a distroless one for amd64, and then the two Arm images are not distroless; they've got a shell and a package manager as well, and there are no debug variants of them. So the new multi-arch preview stuff builds everything from a single Dockerfile: all six container images are done by multi-stage builds in that Dockerfile, so you've got three architectures of production, three architectures of debug, and all the production images are now distroless.
D: So you get the benefits of security: not having a shell and reducing those kinds of vulnerabilities as well. So those are all in here as well, because, regardless, we test the current production images with the same pipeline as we test these new developer-preview ones, and hopefully we can move those over to the proper ones once we're done. So that's, yeah, that's some of the CI. Hopefully it makes a bit of sense; there's quite a lot going on now.
D: It's taken me a while to get everything, but yeah, there are quite a few things. We've got some security analysis going on with the Trivy stuff, and then recently I've added the Open Source Security Foundation Scorecard analysis, which looks at your project and sees what kind of issues you might have with your infrastructure setup, your dependencies, those kinds of things. It can give you a rating out of 10.
D: For each thing, I think we came out at around about seven. We were quite high on most of them; there are a few zeros on stuff that is quite straightforward to fix, and that's what brought the average down, so hopefully we'll fix that in the next few weeks or so. Oh...
D: Scorecard is supply chain security. Of course, it hasn't got the token in it to actually run it yet; I was just running it manually behind the scenes. So yeah, that's some of the CI stuff, if I haven't missed anything that you guys can think of.
D: So the current workflow is: we want to do a release, we push that button, and the pipeline goes all the way through to the end. And now what we're going to do is regular cadence builds, so, like, every night we're running that pipeline — we're not doing the release, we're asking: oh, is this okay for release? And then, at the end of some period, every two weeks, whatever, we can just go: what was the last green one of those?
D: Stick it out the door, and it's got all the changes in that were in that change set, and you get that kind of momentum building up and keep things quick and agile. People can make a change, it appears, and, you know, it's in a release, rather than this big-bang approach of having to manage it all and make sure it's all tested. So hopefully that'll improve things as well. It's just a case of adding more tests and automating it, and we're trying to do some of that.
D: So at the moment we've got integration tests, and we've got these kinds of smoke tests and build-and-packaging tests. Now that we've got this pipeline, there are good places to hang off things like resilience testing or soak testing — you know, keep it running for three days, make sure there's no kind of weird problem with it, right, where after 24 hours something weird goes wrong — and then performance tests as well.
B: You have to run an AMA, right — ask me anything. Yeah, but yeah.
A: Yeah, we can, you know, be like — Pat's like the musician and we'll just call out songs and see what he can come up with, yeah. It's like our building.
D: Yeah, there should be a few more coming; I'm trying to get — I think we'll do a little bit deeper dive into some of the testing stuff, because there's a lot of, like, weird stuff, and stuff you find that needs improvement or is a bit strange to use. Like, reusable workflows are slightly strange at the moment, so there are some little tweaks and things we found while we were doing them.
C: Yeah, and so I think that, if you look at things that could be improved here as we analyze it, it's like: maybe make it a little easier to understand the use cases, easier to understand how you can, you know, get to the community, or even, if you want to download, where do you go?
C: Slight improvements here and there, but let's go to the new version that we've been building. So yeah, here it's trying to visualize and showcase in a better way how we're taking all these sources and destinations, and to make the download much, much simpler.
C: We have the release notes basically available on the front page, so you can immediately get to them, and then from here you're also able to — if we go, oops, let's go back to the home page — yeah, understand who's using it, see some more feature characteristics being showcased, hey, we're a CNCF project, give us a star if you can, which is always appreciated, and some nice graphics about what the configuration looks like.
C: Maybe we have to update it for the YAML stuff with 1.9, and then, you know, just some of the other features and your typical stuff that comes with any product page: how it works. Here we spent a little time just to try to illustrate a little more how this actually functions, so before you download or do anything you get a little bit of understanding: hey, you can scrape metrics, you can automatically tag them, you can query and filter things with APIs.
C: You can resume data where you left off, and you can send data to, you know, five or six more locations, all with a little bit of history there as well. Yeah, we have our community side, so all of that's available, with Slack kind of at the forefront there, and our blog — this is a much-needed update — so if you have a blog post that you want to post on Fluent, we now have a better way, I'd say better infrastructure, to support that, and this will help you.
C: So if you want to, you know, read a blog, it's in a little bit more of a digestible format. So yeah, that's a little bit of the new web page; it should go live here shortly, and, you know, after that we'll keep improving.
C: Yeah, it's always hard to kind of rebuild the website, because you're not sure what you want to add in, what you might want to keep out, and, you know, are you getting the message across — hey, this is what the open source represents — and is that going to be the most useful for the community? So I think we finally got to a good place there.
C: By the way, the great thing about all CNCF projects is that the websites are typically open source. So if you see something in there and you're like, how do they do that? It's a Hugo template; all the source code is going to be available, so you can modify it, and then we just quickly deploy it there too.
B: Okay, great. So: the Fluent partnership with OpenSearch. This is a big one. I don't know if Tim wants to take over that one. Yeah, Tim, you want to talk?
A: That's great, I get called out just because I called Pat out. Now, yeah, so the news came out last week and we're really excited about this partnership. OpenSearch has a lot of traction.
A: It has a lot of functionality; it's a very powerful tool. Working with AWS, they've partnered with us to create connectors for Fluent Bit and Fluentd, and so anyone who uses OpenSearch, or uses Fluent Bit and Fluentd, will now have a first-class connector to ingest data, which just makes that complete flow from data source to dashboard...
A: ...almost seamless at this point. So there'll be a lot of news on when these products are available. The Fluentd connector is actually available now; the Fluent Bit one will come out with the release of Fluent Bit 1.9 within a week or two, and everyone's going to be able to ingest data with Fluent Bit right into OpenSearch. So we'll have more details and a getting-started guide produced with the OpenSearch team.
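Once 1.9 lands, the new output would presumably be configured much like the existing Elasticsearch one; a hedged classic-mode sketch with placeholder connection details:

```ini
[OUTPUT]
    Name   opensearch
    Match  *
    Host   my-opensearch-endpoint   # placeholder
    Port   9200
    Index  fluent-bit
    tls    On
```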
A: There's news too: we'll be with the OpenSearch team at FluentCon, and we'll have more details about this coming up in May. So we're going to do a lot of enablement around this partnership, but we're really excited about the technology, excited about the partnership, and excited to get your feedback.
C: Yeah, yeah, I think that's the super important piece: these things will continually evolve, and it's not like one organization or one project will define what the other does. We really need users to kind of come in and say: hey, this is what I want to see, this is how I anticipate it looking, and this is the use case. So I would recommend, if you're using one of the two technologies and you want to see something specific, like, yeah...
B: For example: Fluentd and Fluent Bit have the Elasticsearch connector, and it works with OpenSearch — why do I need a new one? The thing is that Elasticsearch, as a product, will have its own development path, right; it will add more features, it might add breaking changes, or anything like that. So our native connector for Elasticsearch will be adapted for Elasticsearch, but at some point we need to provide the users who use OpenSearch something that is tested and reliable, that runs behind, say, a CI, ready for production, right.
B: So at some point you might expect — and this is pure speculation, right — that Elasticsearch would take one path and OpenSearch would take another direction in its feature set, maybe, I don't know, a protocol change or anything like that. So from that perspective it makes sense to have a native connector instead of something that merely "also works with it", right. That is the biggest difference. Sometimes we get some users that say: hey, I can connect with Splunk by using the HTTP plugin, right, by using a Lua filter, plus another filter merging the fields, and yeah...
D: For a long time, most of my professional career, I've been working on Red Hat platforms, the last three or four years on OpenShift and that kind of domain, and Red Hat's approach to containers and supply chain security is to very strongly encourage you to be using special Red Hat certified containers — not stuff straight out of Docker Hub or whatever, but through an approved Red Hat registry. And so when I joined Calyptia, one of the first things I did was initiate that process of getting a Red Hat container approved; it's not too difficult.
D
I'd already done it in a previous role, but this was about getting an official open source version through and available in the Red Hat container catalog. We've got that now, and I'll just show you a few of the bits and pieces. Ultimately it is... where has it gone?
D
It's just the upstream open source Fluent Bit. We've forked it into a Calyptia repo because the certification process has to go through us as a technical partner, so we initiate the certification. It's a collector image, but it's just the upstream open source code. In the repo there's a very simple UBI base image; UBI is the Universal Base Image, Red Hat's take on a container base image.
D
So it's essentially using a Red Hat base image, but UBI you can redistribute and reuse in open source as well, and in fact now you can even start pushing it to Docker Hub, which we might start doing soon. If you've looked at any of the Dockerfiles for Fluent Bit, this will look very similar.
D
We've just got the Red Hat base image, then we grab the tarball for the particular version we want, build it exactly the same way, and use it exactly the same way. There's no real difference here. There are a few little special bits and pieces you have to do for certification, like adding licensing information and making sure you're not running as the root user, those kinds of things, but that's pretty straightforward. Then we initiate the certification process, which does a few things.
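The build just described might look roughly like the following Dockerfile, assuming a standard CMake build of the upstream tarball. This is a hypothetical sketch, not the actual Calyptia Dockerfile: the base image tags, version, build flags, and file paths are illustrative, while the `LABEL`, `/licenses`, and non-root `USER` lines reflect the certification requirements mentioned above:

```dockerfile
# Hypothetical sketch of a UBI-based Fluent Bit image with the extra
# certification pieces (license metadata, non-root user). Tags, paths,
# and the build recipe are illustrative only.
FROM registry.access.redhat.com/ubi8/ubi:latest AS builder

ARG FLB_VERSION=1.8.12
RUN yum install -y gcc gcc-c++ cmake make tar gzip openssl-devel \
 && curl -L -o fluent-bit.tar.gz \
      https://github.com/fluent/fluent-bit/archive/refs/tags/v${FLB_VERSION}.tar.gz \
 && tar xzf fluent-bit.tar.gz \
 && cd fluent-bit-${FLB_VERSION}/build \
 && cmake .. \
 && make

FROM registry.access.redhat.com/ubi8/ubi-minimal:latest

# Labels and a /licenses directory are checked by Red Hat's
# certification tooling.
LABEL name="fluent-bit" \
      vendor="Calyptia" \
      summary="Fluent Bit log processor" \
      description="Upstream open source Fluent Bit on a UBI base image"
COPY --from=builder /fluent-bit-*/build/bin/fluent-bit /usr/local/bin/
COPY LICENSE /licenses/LICENSE

# Certification requires the container not run as root.
USER 1001
ENTRYPOINT ["/usr/local/bin/fluent-bit"]
```

The key differences from the standard upstream Dockerfile are exactly the ones the speaker lists: a UBI base instead of Debian/distroless, license metadata, and a non-root user.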
D
It builds the image for us, scans it through some automation tooling, some CI, things like that, and there's also a load of licensing paperwork you have to fill out, just to make sure everything's by the book for the kind of users who want the safety net and security they get from Red Hat saying, "this image is okay to use." And we've actually got this image in the Red Hat container catalog now.
D
So anyone can pull it. The only thing I need to do is update it to the latest version, which I realized as soon as I decided to demo this today.
D
Yeah, because there's not a huge amount in it, it's pretty straightforward. So we've got our product there; you can find it by searching for Fluent Bit, and there's a little Fluent Bit image that I made for Red Hat there as well. I also knocked up a blog post on using it. Primarily, I had to make this image, it got published, and then I wanted to make sure it was...
D
...working properly, because we don't want to tell people to use it if there's something wrong with it. I captured a few bits of the output while I was doing that. It's a very simple test, and I cut a few corners, not with what I was testing but with how I was doing it. Red Hat provides a monitoring stack already that has Fluentd in it, and there are a lot of security aspects.
D
You have to consider those with some of the Red Hat deployments, along with getting access to the logs and things like that. So what I actually do in my testing is use the normal Fluentd log collection and forwarding, but I forward it to our Fluent Bit image, and then I use that to forward it elsewhere. That way I can prove that Fluent Bit can receive stuff and can send stuff, and that was the main test case.
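The relay test just described can be sketched with Fluent Bit's real `forward` input and output plugins. The listen address, ports, and downstream host here are placeholders, not the configuration actually used in the blog post:

```ini
# Sketch of the test setup: the cluster's existing Fluentd forwards logs
# to this Fluent Bit instance, which echoes them and relays them onward.
# Hosts and ports are placeholders.
[SERVICE]
    Log_Level  info

[INPUT]
    Name    forward              # receive records from Fluentd's forward output
    Listen  0.0.0.0
    Port    24224

[OUTPUT]
    Name    stdout               # prove Fluent Bit received something
    Match   *

[OUTPUT]
    Name    forward              # relay elsewhere to prove it can send
    Match   *
    Host    downstream.example.com
    Port    24224
```

Exercising one input and two outputs like this covers both directions (receive and send) without touching the locked-down parts of the OpenShift logging stack.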
D
There's a blog post that goes through it all, how we did it, and some of the tooling we used, but that's the gist of it. Ultimately it's a Red Hat certified image, and there are a few other benefits from that. Recently there was a request for FIPS compliance and those kinds of bits and pieces, and there are some issues with that...
D
...with the current Ubuntu images, because I don't think their OpenSSL version is certified to the latest standard, whereas the UBI images have back-ported FIPS compliance for OpenSSL. So we might be able to do something in that domain if we need to. If someone really wants a FIPS-compliant container image, we can build it based on the UBI one, and we can push that to Docker Hub as well, because there's no problem with distributing it.
D
It just hasn't been done that way yet, and the image is currently only in the Red Hat registry. And of course it's all open source as well, Apache 2.0 licensed, so anyone can pull it.
B
Great, and this is also part of how we at Calyptia continue embracing partnerships with the ecosystem. It's not just about a product; we also invest a lot in open source. As we said at the beginning, we care a lot about open source, which is in our DNA, and about developer tooling, but also about enterprise products to pay the bills, right? And having this full certification for OpenShift opens up new possibilities, because some users previously could not deploy this technology due to the lack of a certified container image.
B
So now, as a company, we've got to that point, and we're going to continue integrating with other ecosystems, whether open source, Kubernetes distributions, or whatever is needed to run this technology in production. I think we have just four minutes left, so one last message: Fluent Talks happens every Friday at the same time. We'll have different people talking, not just about what we're doing, but also sharing knowledge about different areas.
B
We have experts in, for example, CI/CD, Go, and TypeScript, and we're going to try to make room for them too, so they can share general knowledge. So you can also learn from these Friday sessions, have a free-form talk, and we'd like there to be a lot of interaction through the chat. Looking forward to meeting our community. I don't know if you want to add anything else.
C
Come join us, ask us questions, and tell us things. Tell us something's awesome, tell us something sucks; it doesn't matter.
B
And we'll be ready to hear from you. Actually, at Calyptia we are a fully distributed company: a small company of 15 people located in 10 countries, including North America, Central America, South America, Europe, Asia, everywhere. People join us from the UK, and...
B
Yeah, I'm in Costa Rica; it's 3 PM here. So if you're a very proactive person and you like working in open source, just ping us. Send us not your resume but your GitHub profile, and we'll take it from there. Cool.