From YouTube: Quilkin Monthly Project Sync - September 2021
C: Sorry, I was trying to find the right button. This may actually be, what's the word, superseded by some stuff. I was just looking at some of the recent changes. I mean, fundamentally, what it came down to was: I tried to, well, I've now implemented a...
C: Well, basically, it's a load-balancing filter that does a kind of API lookup. But obviously, to do that, I had to change it so that the load balancer only got triggered on the first packet of every flow, because otherwise you'd be calling the API repeatedly. But it seems like there's now some sort of object caching stuff in there; I was looking at that latest PR. Mark, I think, had sent me a link to it.
C
Hadn't
even
got
around
to
putting
a
timeout
in
mine
and
it's
kind
of
neat
without
school
in
there
yeah.
Well,
because
what
I'd
end
up
doing,
as
I
say,
was,
was
changing
things
so
that
you
only
basically,
you
only
did
look
up
the
first
packet
in
the
flow
and
because
there
was
a
lack
of
obviously
you
know
in
terms
of
there
isn't
any
kind
of
global
variable
or
anything.
C: So I was using metadata to do that, basically to spot the new flow and then to feed through that there was an existing flow from the cache. And what I'd done was, I'd changed it so that, basically, instead of spraying packets round-robin across the threads, I was hashing across them, so that I didn't have to worry about memory contention between the flow cache and multiple threads.
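The hashing-instead-of-round-robin idea can be sketched roughly like this (a minimal sketch; `worker_for` is a hypothetical name, not Quilkin's API). Packets from the same source always map to the same worker, so each worker can own its flow cache privately, with no cross-thread contention:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::net::SocketAddr;

/// Pick a worker thread for a packet by hashing its source address,
/// so every packet in a flow lands on the same worker (and hence the
/// same per-worker flow cache), instead of being sprayed round-robin.
fn worker_for(source: SocketAddr, num_workers: usize) -> usize {
    let mut hasher = DefaultHasher::new();
    source.hash(&mut hasher);
    (hasher.finish() as usize) % num_workers
}
```

Because the mapping is deterministic, no locking is needed on the per-worker cache; the trade-off is that load across workers is only as balanced as the hash of the client population.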
D: Yes, so I guess all the threads that we use to process packets will access the same cache, because they share the same filter chain, basically. But we do use a bit of, like, a read-write lock.
C
Yeah,
I
was
sort
of
too
lazy
to
do
that,
but
also,
I
guess
I
felt
like
you
actually
possibly
get
better
performance.
If
you
don't
do
that,
if
you
have
a
separate,
you
know
if
you
can
hash
across
available
threads
and
then
have
a
flow
cache
per
thread.
You'll
probably
actually
get
better
performance
because
you'll
have
better
memory
locality
and
avoid
any
delays
due
to
locking
which
then
I
guess
made
me
start
thinking,
because
you
already
have
kind
of
a
session
concept
in
quilting.
Don't
you
but
that's
more
for
the
returned
packets.
C
So
I
guess
that's
where
I
was
wondering
if
we
almost,
if
almost
you,
so
it
will
make
sense
to
bring
the
two
together
and
have
kind
of
a
session
concept.
That's
bi-directional
and
then
pin
session
effectively
pin
sessions
to
individual
calls
or
threads
so
that
you,
you
maximize
the
kind
of
locality
of
the
memory
as
you're.
Doing
that
I
don't
know
yeah
in
terms
of
performance.
I'm
not
sure
I
have
you
know
what
implications
that
are
and
equally,
I
suppose
you
don't
want
to
you
know
premise.
C
Your
optimization
is
the
root
of
all
evil
and
all
that-
and
I
guess
really
in
terms
of
discussion-
I
was
trying
to
figure
out
where
things
are
going
with
all
of
that,
whether
that's
the
longer
term
goal
to
have
a
kind
of
a
structure
where
you
know
you
only
trigger
things
like
the
load
balancer
once
per
flow
rather
than
on
every
packet,
and
I
know
there'd
been
some
discussion
on
that.
D: Yeah, that was the idea, I guess. The interesting part would be how to figure out what a session is, because to get a session you need a source and a destination, and to get the destination you sort of need to know something about the packet. And unless we use some sort of filter, or unless we have some way to run some user code, it will be hard to get through that part before we can hash to any thread. I guess that's where we'll need to do something.
C
Yeah
and
all
I'd
done
was
just
basically
a
hash
to
to
assign
packets
to
the
worker
thread
and
then
within
the
working
thread.
I
guess
I
was
just
running
a
cache
that
was
per
per
source
source,
ip
and
port
because
obviously
destination
wise
on
that
side
of
the
proxy.
The
desktop
imports
always
the
same,
and
then
when
it
was
a
new
flow.
C: ...I was running the load balancer, and once I'd assigned an endpoint, I then effectively remembered that in the cache. And presumably you could use your TTL thing for that, because if you time something out, it's only a performance hit anyway, if anything, because you'd still get the same response from the API.
C: So it's not a big deal.
C: If it didn't have a timeout and you just aged it out, you'd be fine. I guess, if you wanted to tie return packets into the session, you'd have to... I guess, yeah, that's where the destination would come in, because I guess you'd be hashing stuff that was coming back from the endpoint.
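The per-source flow cache with a TTL that C describes might look something like this sketch (all names here, `FlowCache`, `endpoint_for`, are hypothetical, not the actual implementation). The load balancer lookup only runs on the first packet of a flow, and an expired entry just costs one repeat lookup, since the API would return the same endpoint again:

```rust
use std::collections::HashMap;
use std::net::SocketAddr;
use std::time::{Duration, Instant};

/// A per-worker flow cache keyed on source address: remember which
/// endpoint the load balancer picked for a flow, and age entries out
/// after a TTL.
struct FlowCache {
    ttl: Duration,
    entries: HashMap<SocketAddr, (SocketAddr, Instant)>,
}

impl FlowCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    /// Return the cached endpoint, or run `lookup` (standing in for
    /// the API call / load balancer) on the first packet of a flow
    /// and remember the result.
    fn endpoint_for(
        &mut self,
        source: SocketAddr,
        lookup: impl FnOnce() -> SocketAddr,
    ) -> SocketAddr {
        let now = Instant::now();
        if let Some((endpoint, inserted)) = self.entries.get(&source) {
            if now.duration_since(*inserted) < self.ttl {
                return *endpoint;
            }
        }
        let endpoint = lookup();
        self.entries.insert(source, (endpoint, now));
        endpoint
    }
}
```

Because the cache is owned by one worker thread (packets are hashed to workers), no lock is needed around the `HashMap`.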
E: Yeah, I think, not on the load balancing thing, but on your question about how we do this, by having the load balancer in the filters, or having the session data and stuff: I don't know if you saw, I had a "let's talk about entity components" thing. Not to get into all that, but I think that sort of ties into it: being able to have a filter that can update one component and not interfere with computing another component.
E: So that we could have, like... Mark has an issue there where it's content filtering versus routing. With an entity component system you could have two filters that say: oh, I only get these components, and then I get the data, and as long as those two things don't touch, they can run in parallel.
E: Okay, I think that concept in general of moving towards the entity stuff... because I think a lot of where our performance cost is coming from, a lot of the time being spent, is where we recreate the context object, because the context object is pretty heavy. And if we move to an entity... but just to be clear, I'm not just saying words: how familiar and comfortable is everyone with entity component systems?
E: Yeah, so essentially, if you've seen the ReadContext object in Quilkin, for example, you can think of each of those fields as a component of the context in general, and what ECS, entity components, does is sort of separate having an instance of something from having the component.
E: So when you allocate a new context, instead of getting a whole context object back, you would only get an integer that essentially points to that context later, and then you use this entity to say: hey, I want this component, and this, and this one. Like, okay, in the example I have, say:
E: You can have all those things stored essentially sequentially, all together, and you don't have to have, like, okay, a ReadContext object, then another context object, then another one. So that gives you that. It's used a lot in games; this is a very game-specific thing, but I think there's a lot of applicability.
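A toy illustration of the ECS idea being described, assuming nothing about Quilkin's real types beyond the `ReadContext` name mentioned above: an "entity" is just an index, and each would-be context field becomes a contiguous column, so a routing filter and a content filter touch disjoint storage:

```rust
/// The entity is only an integer handle, not a heavy object.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Entity(usize);

#[derive(Default)]
struct World {
    // Each Vec is one "component" column: contents[i] and endpoints[i]
    // belong to entity i, stored sequentially rather than as separate
    // context objects scattered on the heap.
    contents: Vec<Vec<u8>>,
    endpoints: Vec<Option<std::net::SocketAddr>>,
}

impl World {
    /// Allocating a "context" hands back an integer, not an object.
    fn spawn(&mut self, packet: Vec<u8>) -> Entity {
        self.contents.push(packet);
        self.endpoints.push(None);
        Entity(self.contents.len() - 1)
    }

    /// A routing filter only asks for the endpoint column...
    fn endpoint_mut(&mut self, e: Entity) -> &mut Option<std::net::SocketAddr> {
        &mut self.endpoints[e.0]
    }

    /// ...while a content filter only asks for the payload column.
    fn contents_mut(&mut self, e: Entity) -> &mut Vec<u8> {
        &mut self.contents[e.0]
    }
}
```

In a real ECS library the scheduler uses those declared component sets to run non-overlapping filters in parallel; this sketch only shows the storage layout.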
C: Because I had to, yeah, as it was, I'd had to make about two or three hundred lines of changes to do it. Fortunately, due to the power of Rust, it worked first time. It's miraculous, isn't it? It really is.
C: It took me ages to figure it out, but once I did, the damn thing worked first time around. But yeah, I'll give that a go. It sounds like a better way of doing it than what I'd thought of, so that's all good.
E: I think it'll take us a while, and we want to get load balancing first, so I don't know when we would actually get to having a nice ECS system for Quilkin, but it would certainly be nice to have. The way I would think about it is: essentially, the data pipeline is the world, and then the messages essentially come in and they get their components attached, and then you essentially have a bit of state that says, okay, now all the filters have run over it.
E
This
message
is
clear:
we
can
remove
it
from
the
world
now
and
then
we
send
it
back
to
the
to
the
destination.
A: Actually, it's a good thing that you're here, because I wanted to talk about this ages ago. Aaron, you put forward "let's make Filter an async trait", and at the time, I think, both of us were like, do we really need to do this? And then Charles did a bunch of work, and it was like, this would have been really nice to have.
C: You can only have one thread, one green thread, active when you do that, which is a bit of a pain, but I ended up forking another OS thread to handle my API.
C
The
ecmp
one
was
pretty
trivial,
because
it's
inherently
stateless-
or
at
least
it
is,
if
you
assume,
the
list
of
endpoints
doesn't
change,
whereas,
whereas
this
one
involves
whole
whole
heap
of
state.
C: It's like, yeah, it was... the formatting's probably just got messed up.
E: It's, like, very slightly more often a bit later, I think.
E: You can see, in the 254 one, generally it's, like, under 100 milliseconds, whereas in the async trait one it's generally, like, after 100 milliseconds. So you can see it's a very, very small shift in the distribution.
D: Yeah, I mean, I think I'd much prefer it if we wait for other use cases. I mean, I get that in this case we have a workaround at least, which would be somewhat similar to what we would run into anyway, even though it's a bit hacky. Because I feel like, if we make this async now and we take a performance hit, then it affects pretty much every packet that runs through Quilkin, regardless of whether they use async or not. So it would be nice to wait.
E: Well, the way to do it without using async-trait... actually, I don't think that works. I think you have to box it. You have to box it no matter what.
E: I mean, again, if we're talking about performance, I think this is another thing where the ECS would probably offset this, because, again, we would not be boxing as much. In the code sample I shared, if we did the ECS, you wouldn't be returning a response every time; you would essentially be mutating an object, so the future would not be as big.
A: Yeah, let's write it up. I'd love to see that written out as a ticket, and we can do the comparison of the different designs there; I think it would be quite interesting to see what that would look like as well. Then, if we find we like it, we can also make a plan about how we want to refactor to get there, possibly over time as well.
A
Oh,
this
is
gonna,
be
a
fun
one
380.,
which
is
the
next
one,
which
was
that's
excused
drop
when
full.
That's
an
interesting
discussion.
I'd
I'd,
love
to
see
this
be
like
a
like.
What
should
what
should
happen
when,
like
a
proxy,
gets
overloaded,
rather
than
sort
of
define
the
solution.
E: Well, since I opened the issue: I think the proxy should try to behave as normally as possible under load. It's essentially the proxy's job to take the load, shoulder it, and pretend that that load is basically non-existent to the destination, and always ensure, even if we don't have a rate limit there, that it is not being overwhelmed, that it is not actually getting DDoSed.
E: And it should endeavor to stay up during that time. It should not immediately try to crash out and then restart itself to start a new one. I think it should try to stay up as long as possible.
A: What was I gonna say... okay, so say we're talking about a DDoS-type situation. Or, it's also kind of fun: it's the same if you end up connecting too many clients to a particular proxy, which is essentially the same thing anyway.
E: Right now, we will get to a certain amount; we have a limit and a queue. There are multiple queues: there's the OS network queue, the actual buffer, whatever happens there, and then, on the Quilkin user-space side, we read in every packet as soon as possible and put them into a message queue, and if we hit that limit, I believe it just blocks.
D: Yeah, and I think the queue is really, really small; I think it's just, like, one packet each that you can buffer. So if you try to spam it with lots of packets, pretty much all of it will be handled by the OS before we have a chance to read it, so most of the dropping will happen at the OS level.
A: Now, let me find where it is, where the data comes into the proxy.
E: Actual code first. Okay, I got it: line 107, source/proxy/server.rs, it's a 1024 buffer. Then the sessions, I think, on the other side, are a single packet size, in the session manager. There's this one.
A: Oh, the packet channel is the one that comes back. Yeah, yeah. That's an internal one; that's not a...
E: Like, data comes in in the receive_from function, but that passes it down to send_packets; that's a loop that then distributes those messages over a bunch of workers, which then have their own...
E: So each worker has, like, 1024 messages that they can queue. I think that's my understanding.
D: Yeah, so on the receiving side, I sent a link: the size of the channel is pretty much the number of threads we have. So yeah, I remember this. Basically, we have the one task that reads packets off the socket, and it basically just round-robins through all the workers whenever it gets a packet from the socket, so the channel is that big. And then, yeah, I think the other one is probably in the other direction.
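A rough sketch of the reader-to-workers shape being described, using std channels (hypothetical names; the real code uses its own server and session types): one read loop round-robins packets over small bounded per-worker channels, and a full channel means the packet is shed rather than the reader blocking:

```rust
use std::sync::mpsc;
use std::thread;

/// Spawn workers, each consuming from its own small bounded channel.
fn spawn_workers(num_workers: usize) -> Vec<mpsc::SyncSender<Vec<u8>>> {
    (0..num_workers)
        .map(|id| {
            // A tiny bound, as in the discussion: only one packet of
            // slack per worker before the sender sees backpressure.
            let (tx, rx) = mpsc::sync_channel::<Vec<u8>>(1);
            thread::spawn(move || {
                for packet in rx {
                    // ... run the filter chain on `packet` here ...
                    let _ = (id, packet);
                }
            });
            tx
        })
        .collect()
}

/// Round-robin one packet to the next worker; returns false (packet
/// dropped) if that worker's channel was already full.
fn dispatch(
    workers: &[mpsc::SyncSender<Vec<u8>>],
    next: &mut usize,
    packet: Vec<u8>,
) -> bool {
    let ok = workers[*next % workers.len()].try_send(packet).is_ok();
    *next += 1;
    ok
}
```

With `try_send`, overload surfaces as drops in user space; the blocking variant described in the meeting would instead push the backpressure down into the OS socket buffer.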
E: Oh yeah, I kind of forgot what the point was there, but now we know the number. I think, no matter what it is, it will get overwhelmed, just because this is the thing: you put it in front of video games, and people DDoS video games for, like, no reason. So it will just always happen, so we just need to have a strategy for DDoS, whatever that may be.
D
Yes,
I
think
like
on
the
application
level,
there
isn't
like
much,
we
can
do
if
you
have
enough
traffic
like
the
os
will
pretty
much
break
down
before
we
do
or
whoever
handles
the
packet
before
that.
But
I
on
cool,
can
on
the
cooking
side.
If
you
spam,
if
you
try
to
spam
it
with
as
much
packets
that
it
can
handle
or
more
than
it
can
handle
it
will,
the
current
behavior
would
be
that
it
will
drop
those
packets
and
it
will
work
as
normal.
D: Yeah, because I usually use, you know, the library I sent a link to a few months ago, the one that I used to spam it with a lot of packets. Yeah, that pretty much... yeah, that's good.
A: What was I going to say? I had another thought: this conversation was also where we were talking about having a global rate limiter as well. Because you, or anyone, were talking about, like, if you change things, we have a per-IP connection rate limit. But I was like: well, if I'm the application developer on the game server, I'm going to have to pick a number, say 10 people that connect; I know 10 people are going to connect.
A
I
know
that's
what
that's
what
I
expect
anyway,
10
people
to
connect-
and
I
know
each
one
is
going
to
send
like
30
frames
like
30
frames,
a
second
if
I'm
going
to
get
hit
by
a
ddos
in
it,
because
suddenly
I'm
going
to
get
more
people
connecting
to
this
than
I
would
having
a
global
load.
Balancer
means
that
it'll
actually
mean
that
there's
only
a
certain
amount
of
throughput
at
any
given
point
in
time.
A
And
that
would
that
would
potentially
stop
culkin
from
maybe
like
falling
over
entirely,
because
you'd
have
a
global
load
balancer,
but
it
would
mean
that
even
if
players
were
connected
like
the
thing
would
be
unresponsive
anyway,
because
if
the
packets,
their
their
own
packets,
wouldn't
get
through.
So
I
don't
know
that
was.
That
was
the
impetus
for
that.
That
idea
of
a
global.
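The "pick a number from what you actually expect" budget A describes maps naturally onto a token bucket. This is a hedged sketch of the idea, not Quilkin's rate limiter; with 10 players at 30 packets per second you would provision roughly a 300 pps refill, and anything beyond that budget is shed before it can take the proxy down:

```rust
use std::time::Instant;

/// Token-bucket sketch of a global rate limiter: capacity bounds the
/// burst, refill_per_sec bounds sustained throughput.
struct GlobalRateLimiter {
    capacity: f64,       // burst size, in packets
    refill_per_sec: f64, // sustained packets per second
    tokens: f64,
    last: Instant,
}

impl GlobalRateLimiter {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, refill_per_sec, tokens: capacity, last: Instant::now() }
    }

    /// Returns true if this packet is within budget, false to drop it.
    fn allow(&mut self) -> bool {
        let now = Instant::now();
        // Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = (self.tokens
            + now.duration_since(self.last).as_secs_f64() * self.refill_per_sec)
            .min(self.capacity);
        self.last = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```

As A notes, this keeps the proxy itself alive but does not keep legitimate players responsive during a flood: their packets are shed along with the attacker's once the budget is exhausted.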
A: All righty, anything else on this topic?
E: Kind of related: I just wanted to bring this up to you, Emma, because he said you were working on that firewall thing (oh yeah), and, related to, like, having global rate limiters and doing networking and stuff: I don't know if you've ever looked at the Container Network Interface?
E
It
is
a
abstract
network
interface
for
container,
essentially
what
we
were
talking
about
like
oh,
how
do
we
have
these
settings
for
like
we
want
the
container
to
just
block
just
drop
all
these
connections
from
these
ips
and
stuff,
and
it's
like
these
are
okay,
abstract
rules
for
having
a
container
do
that
stuff
it
doesn't.
It
doesn't
do
like
dynamic
updates
at
the
moment,
but
I
believe
for
doing
like
static
configuration.
E
Like
I
know,
linker
d
uses
has
a
cni
plugin,
for
if
you
want
to
have
it,
do
ip
table
rules
automatically
and
have
that
distributed
across
all
the
containers.
It.
E: And, like, the CNI, I don't understand it all, because it's plugged into a lot of stuff, but essentially you could have, like, a Quilkin plugin also plug in with, like, the firewall plugin, and have all of them work together, and so we could save a lot of time implementing that functionality if we can work with CNI.
D: Yeah, because that's the same, I guess, Linkerd is using it, but I know Istio's story: the idea would be that anything that comes to the pod gets handled by the proxy, so they basically just rewrite the iptables rules so that all requests go through the proxy service.
D: Yeah, at least there they have, like, an init container that runs first and sets up all the iptables things, and then afterwards the proxy actually runs. I guess the CNI part is a bit more large-scale; maybe that fits more into how pods talk to each other rather than a single pod. So it might be that, yes, it's relevant for us, but maybe the iptables part more so.
A: Yeah, please follow up on that; that's an interesting one. We should look into that. I like that idea.
C: Yeah, there's stuff you can do, isn't there, with chaining CNIs and stuff like that, so you can slide an extra layer in or whatever.
C: Yeah, I'll play with it a bit. But yeah, I mean, like, say, the Istio thing: their init container just sets up your iptables rules. But then, if you wanted to... I guess it'd be a bit different for DDoS, presumably, because you're trying to set them up on the fly.
A: Great, you do the work. Cool.
C
The
other
thing
I
wondered
about
the
detail
stuff
is:
is
there
any
sort
of
mileage
in
stuff
where
you
can
kind
of
say
well
if
we,
if
we
don't
intercept
existing,
so
I
guess
the
challenge
is
knowing
what's
a
valid
session,
but
but
you
know,
if
you
kind
of,
have
valid
sessions-
and
you
can
say
well,
let's
let
those
flow
through
as
normal
and
then
kind
of
rate
them
at
arrival
of
new
ones.
Yeah.
A: I'm thinking about where maybe we should, sooner or later, start thinking about sort of, like, rules and actions and stuff. Like, if...
A: Yeah, if it's overwhelmed, do this; or let people... like, I can see that you're coming up against rate limiting here, because it seems like you need us to, like, tell the firewall to block these IP addresses at a higher level and start being reactive that way. Well, yeah, let's sit down and do some design work on that, because I feel like the way I'm thinking about Quilkin...
A
Those
personally
is
like
this
is
one
part
of
like
seven
different
things
that
will
be
part
of
an
abuse
system
and
so
being
able
to
collate
that
data
of
like
okay
like
who
is
this
coming
from?
Do
they
look
like
a
real
player?
Is
it
someone
we
trust
like?
Oh?
No.
It's
not,
therefore
like.
A: Yeah, just looking at some old stuff here. If we want to turn that data pipeline stuff into, like, a ticket to talk through...
E: I've been on holidays for the past month, so maybe the plans have not been updated in a while, but...
A: Yeah, that's good, actually. For me, I think it looks like the clear-bot-history-on-new-build thing seems to be working for everyone, except affinity, who has blocked the bot, which is kind of hilarious. Apparently it's annoying, but it seems to be working. So if we're happy with that, actually, I might just merge it now, or approve it now, and I'll update the branch if they're...
A: It doesn't work quite as nicely as something like GitHub Actions, but the notifications are just nice, because they can help people find things if they're wrong. But we could also set it up so that it only pings when something breaks, which is also fine. I like the notifications personally, just because it means I can, like, hit a button and then run off, and then I go and check my email.
A: Oh, that was the other fun one I wanted to talk about; this has sort of lain fallow for a little while as well: the replacing slog with tracing. That seemed to get stuck on some performance stuff, and it's an interesting problem.
A
I
should
actually
write
a
note.
The
other
thought
I
had
here
was
to
maybe
go
straight
to
open
telemetry
leave
logging,
where
it
is
but
go
to
open
telemetry
for
the
distributed
tracing
side
of
things,
which
is
the
stuff
that
I'm
probably
more
interested
in.
I
don't
know
if
it
doesn't
necessarily
help
us
as
much
may
not
help
us
as
much
with
the
the
just
like
cpu
performance
or
like
like
local
testing
as
much,
although
it
may
anyway.
E: My understanding was that the performance issue is just because, currently, the instrumentation is called on everything, which is just... yeah.
D: Yeah, I remember; it sounded like, if we only wanted to use this for, like, logging, then we could, since, yeah, I mean, we usually don't log a lot on the hot path, on the critical path, anyway. It was mostly, as was mentioned, having the instrumentation on, like, each filter read or write function; that became a bit expensive.
E: Yeah, I think we should just have that behind config, you know, so that, like you said, you say specifically: hey, I want to run Quilkin with instrumentation.
A: The stuff that's in my head with this is also, like... it's basically distributed tracing as well: being able, across all the proxies, to visualize, down to the player level, some information, like how long each one took to process in its entirety, or what weird stuff may have happened. Which is why the tracing library, and plugging that into OpenTelemetry, I think, is really exciting.
E: That instrumentation should not be on by default. It should be behind a config: either behind, like, a default, or a debug or a feature flag.
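The "instrumentation behind a config flag" point can be sketched as a simple gate (hypothetical `Config` and `run_filter` names; the real project would emit `tracing` spans rather than a print): the hot path pays nothing unless the operator opts in:

```rust
use std::time::Instant;

/// Opt-in switch for per-filter instrumentation; off by default.
struct Config {
    instrument_filters: bool,
}

/// Run one filter stage, only measuring it when instrumentation is
/// enabled, so the common case has no timing overhead at all.
fn run_filter(config: &Config, name: &str, filter: impl FnOnce()) {
    if config.instrument_filters {
        let start = Instant::now();
        filter();
        // Stand-in for a tracing span/event around the filter call.
        println!("filter {} took {:?}", name, start.elapsed());
    } else {
        filter();
    }
}
```

A feature flag would push the same decision to compile time instead, removing even the branch.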
A: Because we track metrics, and I actually still want to go back and do this: we track metrics about, like, performance and stuff. But if you want to dig into it, like, oh, you can get metrics on individual ones, but if you want to get, like, per-request metrics, distributed tracing would make that really, really cool and really, really handy for generating that.
A: Yeah, but if you do something like OpenTelemetry, and you go to a distributed tracing platform, like something open source like Jaeger, or, like, the tracing stuff in Stackdriver...
A: There are other things that do it, and OpenTelemetry is a front end for that; then you can start to get down to, like, the per-request level. So, cool stuff that way.
A: So they'd work separately; you'd still have your metrics, and that's fine. So we'd keep... I think, eventually... so OpenTelemetry is meant to cover metrics and tracing and logging eventually; for Rust, though, all it really does is tracing, with its library. So eventually it will...
A
It
should
cover
all
three
and
that's
the
goals
of
the
project,
but
like
all
open
source
projects,
it
doesn't
do
everything,
yet
the
metrics
library
is
still
alpha
from
the
last
time
I
looked
at
it
and
I
think
the
last
time
you
looked
at
if
any
was
missing
a
whole
bunch
of
functionality
from
memory
remember
correctly,
yeah
and
the
tracing
library
that
you
have
has
a
plug-in
for
open
telemetry,
which
is
really
cool,
so
you
can
like
trace
it
through.
So
I
would
say:
leave
leave
stuff
leave
metrics
in
prometheus.
For
now.
A: I don't think we would change any of that; that's fine. But if we want to do tracing, like, we can use this tracing library, use the OpenTelemetry plug-in, and, I think, there are probably, like, three or four steps hidden in there behind, like, a prototype thing, and then we can hopefully plug that into one of the systems that allows us to do distributed tracing.
E: Cool. That also reminded me: I don't know if you saw, but the Pion developers actually got a working example of WebRTC working, all that stuff, so we might now actually be able to check out some of their libraries and maybe use some of that.
A: Yeah, there's... yeah. One of the things that I've always had in my head is that the goal of Quilkin is, like, being able to be like, okay, we're going to make this easy for you, and because we make it easy for you, people will adopt these things as standards, or more people will adopt these things as standards.
A: Good stuff in there. Sweet. Oh, before we go, just to remember: I'm taking December off, so I'm just giving a heads-up.