B
Welcome, all, to the foundational infrastructure working group meeting. Carson, do you have anything you wanted to discuss before we dive into BOSH-related things?
D
I
didn't
come
here
with
anything
in
particular
to
discuss.
I
guess
of
relevance.
Is
there's
been
some
syslog
issues
asking
about
integrating
fluent
bit
to
cover
some
of
the
deficiencies
of
of
Black,
Box,
yeah
I.
Think
well,
there's
definitely
some
Merit
there,
given
that
we're
trying
to
push
otel
on
the
ARP
logging
metrics
side,
I'm
wondering
if
there's
a
way
to
bring
those
two
proposals
together
and
use
and
kind
of
it's
been.
It's
been
hard
to
figure
out
in
my
head.
D
So
this
is
just
a
vague
notion,
but
perhaps
we
could
use
the
hotel
collector
for
the
purpose
proposed
of
fluent
bit
to
forward
stuff
to
syslog
as
well.
D
I guess, yeah: theoretically the blackbox part of syslog-release is meant to copy any BOSH-created logs out of /var/vcap/sys/log and forward them to the running syslog instance. Conceivably.
D
We
could
just
remove
that
from
the
syslog
release
and
move
it
to
under
the
purview
of
the
arp
working
group,
with
the
hotel
collector,
for
instance,
in
the
future.
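(Editor's note: a rough sketch of what "use the otel collector for the purpose proposed for fluent-bit" could look like, assuming the filelog receiver and syslog exporter from opentelemetry-collector-contrib. This is an untested illustration, not a proposed configuration; the endpoint and paths are placeholders.)

```yaml
# Hypothetical otel collector pipeline: tail BOSH job logs
# (what blackbox does today) and forward them to a syslog drain.
receivers:
  filelog:
    include:
      - /var/vcap/sys/log/**/*.log
exporters:
  syslog:
    endpoint: syslog-drain.example.internal   # placeholder
    port: 514
    network: udp
    protocol: rfc5424
service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [syslog]
```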
D
Definitely
yeah
I
I
think
someone
involved
in
this
reached
out
to
me
with
a
in
in
the
fluent
bit
when
someone
involved
I,
don't
know
if
it
was
Karsten
or
someone
else
reached
out
and
dm'd
me
and
I've
been
talking
with
them
about
the
hotel.
Collector
I
should
move
that
to
the
public
issue
at
some
point.
Okay,.
B
Bayonne,
do
you
think
Carson
should
join
this?
This
discussion
as
well.
The
RFC
discussion
is
that
what
you're
referring
to
he's
not
currently
in
there.
D
I'll drop a comment in the issue: I can tag Carson and Max and point them at the RFC, and then they can choose whether to go there.
A
Tried
to
reach
everyone
inside
the
sebik
who
can
get
feedback
regarding
the
RFC
as
I
have
to
promote
it.
So
that's
why
we
have
some
comments
but
yeah,
okay,
I'm,
not
sure
about
carlston,
because
he's
in
a
different
department.
D
I don't think so. If y'all can... if anything, I might just drop it. Okay.
B
It has two approvals, so it can be merged. My question: do we need to do some coordination? Does it make sense to have this auto-bump already when it's not being consumed, just because this will make it end up in the release, right?
E
Yeah, but I mean, we are currently implementing the consumption of it, right? And it's harder to consume it if it's not there already. We played it through, I guess, with some local builds, but Felix, you are more in the details here. Now we want to get it in, and therefore the first step is to bump it, and then to consume it.
B
It's just that there are more PRs that are going to need to be merged, right? And this one is ready to be merged, but maybe it makes sense to merge them all at the same time, so that we end up with one Director release where this stuff is all in, and we're not having an intermediate Director release where there is a CLI but it's not functional.
B
Because it should be on BOSH, right? Yes.
B
There were also PRs for that, right? Let me look them up.
B
And we would need to... it doesn't make sense to bump this into the stemcell without the agent, right? We need an agent version first that has the support for it; then that version will be auto-bumped in the stemcell, and we want to have this one available at the same time. So there's a bit of coordination again, yeah. We don't have to coordinate that one with the Director, but we need to coordinate this one with the agent.
A
Should we use some labels? Do we have a label for that? Comments can get lost in between.
E
We did so today, yeah, yeah. Okay.
F
There's also just one remark from Max I remember from this morning: he mentioned maybe we first bump the pipeline and then do this, so we can create a new release for the Azure storage.
B
And these two should be moved to a different working group. Beyhan, did you get around to... you were going to create a PR for that, right?
B
Okay,
let's
see
if
there's
issues.
B
Yeah, but for the initial PR it would be good to... it's probably possible through the API to figure out, for the list of foundational infrastructure working group repos, which ones are using a deployment key, and then put those in the exception list. And then we can work with the repo, I don't know, ownership, whatever.
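(Editor's note: for reference, a sketch of how that API check could look, using the GitHub REST API's deploy-keys endpoint via the gh CLI. Listing keys requires admin access to each repo, and the repo names below are placeholders, not the actual list.)

```shell
# Hypothetical: report which repos carry deploy keys, so they can
# go on the exception list. Requires `gh auth login` with admin scope.
for repo in bosh bosh-agent bosh-cli; do        # placeholder repo names
  count=$(gh api "repos/cloudfoundry/${repo}/keys" --jq 'length')
  echo "${repo}: ${count} deploy key(s)"
done
```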
B
It is a really... what do you say?
B
So
I'm
just
gonna
set
this
to
open
for
contribution.
For
now
we
can
close
it
in
a
bit.
B
Yes, so there was a bit of a bombshell here. There is some implementation for pcap to do packet capture in a distributed manner. And then we had a bit of a discussion during CF day on how to... I think this started as a request like: could we add a, I don't know, plug-in mechanism to the BOSH CLI, so that people could add additional functionality to the BOSH CLI. And as we started talking, it turned out...
B
It
was
for
this
and
then
the
idea
was
raised
of
like
maybe
we
can
just
integrate
it
in
the
director
or
in
the
board
CLI
and
have
it
available
for
everybody
and
yeah.
Then
this
RFC
was
created
and
then,
while
reading
through
it
I
started
to
look,
it
occurred
to
me
that
it
looked
like
really
similar,
like
basically
it's
streaming
logs
from
different
instances.
B
Right,
that's
in
the
end,
what's
happening
and
I
remember
that
we
already
have
something
like
that
in
the
the
bar
CLI,
where
we,
where
you
can
do
it,
the
Dash
F
I
think
to
follow
logs
to
tell
the
logs
of
the
different
instances.
B
So
I
was
wondering:
how
does
that
actually
work,
and
then
it
turns
out:
that's
just
going
over
SSH,
so
it's
just
doing
a
Bosch
SSH
under
the
hood
and
doing
some
like
magic
and
then
just
using
that
to
stream
the
log
from
all
the
instances
to
the
boss,
CLI,
which
actually
isn't
that
bad
for,
like
a
for
like
for
what?
For
how
often
you
would
be
using
this
right,
it
is
really
useful
if
you
need
it,
but
it's
you're
not
going
to
be
using
this
daily
right.
B
Sometimes
you
have
like
a
really
specific
issue
and
then
it's
really
useful
to
do
to
to
be
able
to
do
a
distributed
package
capture,
but
having
a
running
system
or
like
a
running
API
and
a
different
agent
on
all
the
instances.
It's
a
bit
much
yeah,
so
yeah.
B
There are even more comments. Oh wait, yeah, this is... oh, I see, I haven't read this.
B
Yes, lots of... I don't know how... where do we go from here, right? These are all good arguments, but basically: I really understand why you need all of this when you're doing platform things, right? When you're building a platform and you want to offer this as functionality to end users.
B
Then
you
would
want
to
do
really
robust
engineering
and
everything,
and
it
should
scale,
but
on
the
bar
side,
how
often
would
you
be
using
this?
Can
we
justify
adding
all
this
complexity
for
packet
capture
on
Bosch,
VMS,
I'm
I'm,
not
sure
like?
That
would
be
the
thing
to
to
figure
out
I
guess
like
how?
How
much
is
this
used?
How
much
is
there
a
need
for
this.
B
Because
it
it
sounds
like
the
complex
case
is
going
to
be
built
anyway,
right
because
it's
needed
for
CF,
which
is
fine,
I,
think,
but
then
just
using
all
that
complexity
on
the
bar
side,
as
well,
just
to
keep
the
code
bases
the
same
I,
don't
know
if
that
makes
sense
right.
It
could
also
be
that
we
just
use
one
bit
of
it
right.
There's
one
common
part
where
the
I
don't
know
deduplication
or
something
is,
is
happening
as
a
gold
package
in
the
CLI
and
that's
it.